
    SNMP Trace Analysis: Results of Extra Traces

    The Simple Network Management Protocol (SNMP) was introduced in the late 1980s. Since then, several evolutionary protocol changes have taken place, resulting in the SNMP version 3 framework (SNMPv3). Extensive use of SNMP has led to significant practical experience among both network operators and researchers. Recently, researchers have gained access to real-world SNMP traces, which allows them to analyze how SNMP is applied in practice. A publication from 2007 made a significant start with this. However, real-world trace analysis demands a continual approach, due to changing circumstances (e.g., regarding the network and SNMP engine implementations). Therefore, this paper reports on considerably more, and more recent, traces than the aforementioned publication.

    Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection

    The distribution of malicious hosts over the IP address space is far from uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming so-called Bad Neighborhoods. This phenomenon has previously been exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be used interchangeably to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
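    The degree to which two blacklists share entries can be quantified with a simple set-overlap measure. The sketch below uses hypothetical /24 prefixes and the Jaccard index as the overlap metric; the actual comparison methodology of the paper may differ.

    ```python
    from ipaddress import ip_network

    # Hypothetical Bad Neighborhood blacklists, each a set of /24 prefixes.
    bl_a = {ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")}
    bl_b = {ip_network("203.0.113.0/24"), ip_network("192.0.2.0/24")}

    def jaccard(a, b):
        """Fraction of entries shared by two blacklists (0 = disjoint, 1 = identical)."""
        return len(a & b) / len(a | b)

    print(jaccard(bl_a, bl_b))  # 1 shared entry out of 3 distinct -> ~0.33
    ```

    A low overlap score suggests the blacklists are source-specific and not freely interchangeable.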

    Internet Bad Neighborhoods Aggregation

    Internet Bad Neighborhoods have proven to be an innovative approach for fighting spam. They have also helped us to understand how spammers are distributed on the Internet. In our previous works, the size of each bad neighborhood was fixed to a /24 subnetwork. In this paper, however, we investigate whether it is feasible to aggregate Internet bad neighborhoods not only at /24, but at any network prefix. To do that, we propose two different aggregation strategies: fixed prefix and variable prefix. The motivation is to reduce the number of entries in the bad neighborhood list, thus reducing memory storage requirements for intrusion detection solutions. We also introduce two error measures that allow us to quantify how much error is incurred by the aggregation process. Both strategies were evaluated by analyzing real-world data in our aggregation prototype.
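    The fixed-prefix strategy can be sketched as merging every /24 entry under its covering supernet and summing the host counts. The prefixes and counts below are hypothetical, and the error measures of the paper are not reproduced here.

    ```python
    from collections import defaultdict
    from ipaddress import ip_network

    # Hypothetical /24 Bad Neighborhood list: prefix -> number of spamming hosts seen.
    bn24 = {
        "203.0.113.0/24": 40,
        "203.0.112.0/24": 10,
        "198.51.100.0/24": 5,
    }

    def aggregate_fixed(bn, new_prefix=16):
        """Fixed-prefix aggregation: merge each /24 entry into its covering supernet."""
        agg = defaultdict(int)
        for prefix, count in bn.items():
            supernet = ip_network(prefix).supernet(new_prefix=new_prefix)
            agg[str(supernet)] += count
        return dict(agg)

    print(aggregate_fixed(bn24))  # {'203.0.0.0/16': 50, '198.51.0.0/16': 5}
    ```

    The list shrinks from three entries to two, at the cost of coarser (and possibly over-generalized) scores; that loss of precision is what the paper's error measures quantify.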

    Attacks by “Anonymous” WikiLeaks Proponents not Anonymous

    On November 28, 2010, the world started watching the whistleblower website WikiLeaks, which began publishing part of the 250,000 US Embassy diplomatic cables. These confidential cables provide insight into U.S. international affairs from 274 different embassies, covering topics such as analyses of host countries and leaders, and even requests to spy on United Nations leaders.

    The release of these cables caused reactions not only in the real world, but also on the Internet. In fact, a cyberwar started just before the initial release. WikiLeaks reported that its servers were experiencing distributed denial-of-service (DDoS) attacks. A DDoS attack consists of many computers trying to overload a server by firing a high number of requests, ultimately leading to service disruption. In this case, the goal was to prevent the release of the embassy cables.

    After the initial cable release, several companies severed ties with WikiLeaks. One of the first was Amazon.com, which removed the WikiLeaks website from its servers. Next, EveryDNS, the company with which the domain wikileaks.org was registered, dropped the domain entries from its servers. On December 4th, PayPal cancelled the account that WikiLeaks was using to receive online donations. On the 6th, the Swiss bank PostFinance froze the WikiLeaks assets and Mastercard stopped processing payments to the WikiLeaks account. Visa followed Mastercard on December 7th.

    These reactions caused a group of Internet activists (or "hacktivists") named Anonymous to start a retaliation, named "Operation Payback", against PostFinance, PayPal, MasterCard, Visa, Moneybookers.com and Amazon.com. The retaliation was performed as DDoS attacks on the websites of those companies, disrupting their activities (except in the case of Amazon.com) for varying periods of time.

    The Anonymous group consists of volunteers who use a stress-testing tool to perform the attacks. This tool, named LOIC (Low Orbit Ion Cannon), is available both as a desktop application and as a Web page.

    Even though the group behind the attacks claims to be anonymous, the tools it provides do not offer any security services, such as anonymization. As a consequence, a hacktivist who volunteers to take part in such attacks can easily be traced back. This is the case for both current versions of the LOIC tool. Therefore, the goal of this report is to present an analysis of privacy issues in the context of these attacks, and to raise awareness of the risks of taking part in them.

    Internet Bad Neighborhoods

    A significant part of current Internet attacks originates from hosts that are distributed all over the Internet. However, there is evidence that most of these hosts are, in fact, concentrated in certain parts of the Internet. This behavior resembles crime distribution in the real world: it occurs in most places, but it tends to be concentrated in certain areas. In the real world, high-crime areas are usually labeled as “bad neighborhoods”. The goal of this dissertation is to investigate Bad Neighborhoods on the Internet. The idea behind the Internet Bad Neighborhood concept is that the probability of a host behaving badly increases if its neighboring hosts (i.e., hosts within the same subnetwork) also behave badly. This idea, in turn, can be exploited to improve current Internet security solutions, since it provides an indirect approach to predicting new sources of attacks (the neighboring hosts of malicious ones). In this context, the main contribution of this dissertation is to present the first systematic and multifaceted study of the concentration of malicious hosts on the Internet.

    We have organized our study according to two main research questions. In the first research question, we focus on the intrinsic characteristics of Internet Bad Neighborhoods, whereas in the second research question we focus on how Bad Neighborhood blacklists can be employed to better protect networks against attacks. The approach employed to answer both questions consists of monitoring and analyzing network data (traces, blacklists, etc.) obtained from various real-world production networks.

    One of the most important findings of this dissertation is the verification that Internet Bad Neighborhoods are a real phenomenon, which can be observed not only as network prefixes (e.g., /24, in CIDR notation), but also at different and coarser aggregation levels, such as Internet Service Providers (ISPs) and countries. For example, we found that 20 ISPs (out of 42,201 observed in our data sets) concentrated almost half of all spamming IP addresses. In addition, a single ISP was found to have 62% of its IP addresses involved with spam. This suggests that ISP-based Bad Neighborhood security mechanisms can be employed when evaluating e-mail from unknown sources. This dissertation also shows that Bad Neighborhoods are mostly application-specific and that they might be located in neighborhoods one would not immediately expect. For example, we found that phishing Bad Neighborhoods are mostly located in the United States and other developed nations – since these nations host the majority of data centers and cloud computing providers – while spam mostly comes from Southern Asia. This implies that Bad Neighborhood-based security tools should be application-tailored. Another finding of this dissertation is that Internet Bad Neighborhoods are much less stealthy than individual hosts, since they are more likely to strike a previously attacked target again. We found that, in a one-week period, nearly 50% of the individual IP addresses attacked a particular target only once, while up to 90% of the Bad Neighborhoods attacked more than once. Consequently, this implies that historical data on Bad Neighborhood attacks can potentially be employed to predict future attacks. Overall, we have put Internet Bad Neighborhoods under scrutiny from the point of view of the network administrator. We expect that the findings provided in this dissertation can serve as a guide for the design of new algorithms and solutions to better secure networks.
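    The ISP-level concentration claim ("20 ISPs concentrated almost half of all spamming IP addresses") boils down to a top-k share computation. The sketch below uses invented per-ISP counts, not the dissertation's data sets.

    ```python
    # Hypothetical counts of distinct spamming IP addresses per ISP (ASN -> count).
    spam_per_isp = {"AS1": 5000, "AS2": 3000, "AS3": 900, "AS4": 600, "AS5": 500}

    def top_k_share(counts, k):
        """Fraction of all malicious IPs concentrated in the k worst ISPs."""
        ranked = sorted(counts.values(), reverse=True)
        return sum(ranked[:k]) / sum(ranked)

    print(top_k_share(spam_per_isp, 2))  # AS1+AS2 hold 8000 of 10000 -> 0.8
    ```

    A share close to 1 for a small k is exactly the kind of skew that makes ISP-level Bad Neighborhood mechanisms attractive.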

    An approach to measure the complexity and estimate the cost associated to Information Technology Security Procedures

    IT security has become, over recent years, a major concern for organizations. However, it does not come without large investments, both in the acquisition of tools to satisfy particular security requirements and in the complex procedures needed to deploy and maintain a protected infrastructure.
    The scientific community has proposed, in the recent past, models and techniques to estimate the complexity of configuration procedures, aware that they represent a significant operational cost, often dominating the total cost of ownership. However, despite the central role played by security within this context, it has not been subject to any investigation to date. To address this issue, we apply a model of configuration complexity proposed in the literature in order to estimate the impact of security on the complexity of IT procedures. Our proposal has been materialized through a prototypical implementation of a complexity scorer system called the Security Complexity Analyzer (SCA). To prove the concept and technical feasibility of our proposal, we have used the SCA to evaluate real-life security scenarios. In addition, we have conducted a study investigating the relation between the metrics proposed in the model and the time spent by the administrator while executing security procedures, with a quantitative model built using multiple regression analysis, in order to predict the costs associated with security.
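    The regression step described above, relating complexity metrics to administrator execution time, can be illustrated with a single-metric ordinary least squares fit. The scores and timings below are invented, and the paper's actual model uses multiple metrics rather than one.

    ```python
    # Hypothetical data: complexity score of a security procedure vs. minutes to execute.
    scores = [3, 5, 8, 12, 15]
    minutes = [10, 16, 25, 38, 47]

    def fit_line(xs, ys):
        """Ordinary least squares for y = a*x + b (single-metric sketch of the model)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    a, b = fit_line(scores, minutes)
    # Once fitted, the line predicts the time (cost) of an unseen procedure:
    print(f"predicted minutes for score 10: {a * 10 + b:.1f}")
    ```

    With several metrics instead of one, the same idea generalizes to the multiple regression the authors describe.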

    Fighting Spam on the Sender Side: A Lightweight Approach

    Spam currently comprises approximately 90 to 95 percent of all e-mail traffic on the Internet, and it is a problem far from solved. The losses caused by spam are estimated to reach up to $87 billion yearly. Most proposals for fighting spam focus on the receiver side and are usually resource-intensive in terms of processing requirements. In this paper we present an approach to address these shortcomings: we propose to (i) filter outgoing e-mail on the sender side and (ii) use lightweight techniques to check whether a message is spam or not. Moreover, we have evaluated the accuracy of these techniques in detecting spam on the sender side with two different data sets, obtained at a Dutch hosting provider. The results obtained with this approach suggest we can significantly reduce the amount of spam on the Internet by performing simple checks on the sender side.
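    To give a flavor of what "lightweight" sender-side filtering means, the sketch below applies two cheap heuristics to an outgoing message before it leaves the provider. These particular checks are illustrative assumptions, not the techniques evaluated in the paper.

    ```python
    import re

    def looks_like_spam(message: str) -> bool:
        """Cheap sender-side checks: excessive links and all-caps shouting.

        Both checks run in a single pass over the text, with no external
        lookups, which keeps per-message cost low on the sender side.
        """
        too_many_links = len(re.findall(r"https?://", message)) > 5
        shouty = message.isupper() and len(message) > 20
        return too_many_links or shouty

    print(looks_like_spam("BUY NOW " * 10))  # all-caps blast -> True
    print(looks_like_spam("Hi, lunch tomorrow?"))  # -> False
    ```

    The appeal of such checks is that they need no per-message database queries or heavyweight content analysis, so they scale to a hosting provider's full outbound volume.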

    Internet Bad Neighborhoods: the Spam Case

    A significant part of current attacks on the Internet comes from compromised hosts that usually take part in botnets. Even though bots themselves can be distributed all over the world, there is evidence that most of the malicious hosts are, in fact, concentrated in small fractions of the IP address space, on certain networks. Based on that, the Bad Neighborhood concept was introduced. The general idea of Bad Neighborhoods is to rate a subnetwork by the number of malicious hosts that have been observed in it. Even though Bad Neighborhoods have been successfully employed in mail filtering, the concept itself had not been investigated in further detail. Therefore, in this work we take a closer look at it, by proposing four definitions for spam-based Bad Neighborhoods that take into account the way spammers operate. We apply the definitions to real-world data sets and show that they provide valuable insight into the behavior of spammers and the networks hosting them. Among our findings, we show that 10% of the Bad Neighborhoods are responsible for the majority of spam.
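    The basic rating idea described above, scoring a subnetwork by the number of malicious hosts observed in it, can be sketched in a few lines. The IP addresses below are hypothetical, and the paper's four refined definitions go beyond this simple count.

    ```python
    from collections import Counter
    from ipaddress import ip_network

    # Hypothetical spam source IPs observed in mail logs.
    spam_ips = ["203.0.113.5", "203.0.113.9", "203.0.113.200", "198.51.100.7"]

    def bad_neighborhoods(ips, prefixlen=24):
        """Rate each prefix by the number of distinct spamming hosts observed in it."""
        return dict(Counter(
            str(ip_network(f"{ip}/{prefixlen}", strict=False)) for ip in set(ips)
        ))

    print(bad_neighborhoods(spam_ips))  # {'203.0.113.0/24': 3, '198.51.100.0/24': 1}
    ```

    A mail filter can then weight an unknown sender by the score of its surrounding /24 rather than by its individual history.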